Reflections on AI Safety Conferences

Over the past year, I've been to five conferences and workshops in and around the field of AI Safety, and more generally about high-impact careers. To get the most out of them, I want to write about my experiences, which may in the process prompt the reader to attend a future iteration of one. Would recommend!

EA Global - New York City 2025

Global Challenges Project Workshop

AISST/MAIA Technical Workshop

DC Policy Mini-Conference

As the first iteration of this concept, university AIS groups from the DC area (with the addition of GT AISI and Amherst College, thanks to our incredible organizing capacity) met at the Workshop House in DC for a 3-day conference that exposed us to the state of AI policy on the Hill and to the careers that have the most impact in pushing safe AI legislation through.

The most impactful speakers for me were Lennart Heim, who gave a talk on compute governance, and Stephen Wiecek, who spoke about the State Department's work on Pax Silica, a multilateral buyers' club for Western powers to incentivize investment in chip mining and foundational AI infrastructure development.

It seems to me that regulating compute may be one of the clearest ways to enforce pauses in frontier development, and simply understanding the compute stack has shifted my timelines slightly and raised my credence that the government could actually regulate development (shoutout to the RAISE Act!). I also remember being surprised that the USG is reaching outward to actually implement something like Pax Silica, and I enjoyed the debate over whether it will work as intended. Overall, much of the policy work in DC involves adapting your own mission statement and world model to the incentives that exist on the Hill in order to get *anything* passed.

I need to be able to work on ideas, write experiments, and argue for cases that have no perceivable value if the world remains approximately as it is now. The reason I am okay contending with arguments about existential risk is that it's apparent someone needs to plan for transformative yet low-probability-mass events. This thinking is, in many ways, completely devalued in the machine of American government. There is a distinct and iterative loss of meaning as an idea travels from the Substack of a philosopher to the desks of the institutes for Americans and policies and AI and security, and still farther to the briefing room of a completely submerged congressional staffer who may then commission single-line changes in a bill that inevitably has a 30% chance of passing on Polymarket.

My thoughts are clearly not as well informed as those of the speakers we heard, but I think the policy work I am most apt for is writing technical briefs and advising on the language and formulae within relevant pieces of legislation. This is not to say that people who believe in securing safe and aligned intelligence should not work in policy. I am merely offering an appraisal of the fortitude I see as necessary to work this close to the Hill with ideas as big as the reader's. I hope the reader agrees that anyone with the right ideas and blog posts can make an impact in passing forward-thinking legislation.

ControlConf